It's no secret that Siri revolutionized the way we use our phones—and as it turns out, the revolution is just beginning. The technology behind the digital personal assistant is ever-evolving to acquire more information and become increasingly effective. Just how is this data gathered . . . and how comfortable should we be with these potentially invasive methods?
The Controversial New Way to Analyze Data
Implicit personalization is a fascinating and controversial data-mining technique at the forefront of new technology. While its predecessor, explicit personalization, relies on direct techniques—such as asking a user to complete a questionnaire—implicit personalization gathers information passively.
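To make the distinction concrete, here is a minimal, hypothetical Python sketch (every class, field, and venue name below is invented for illustration): explicit personalization records what the user tells the assistant directly, while implicit personalization infers the same preference from passively observed events.

```python
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class ExplicitProfile:
    """Built from answers the user gives directly (e.g., a questionnaire)."""
    favorite_cuisine: str | None = None


@dataclass
class ImplicitProfile:
    """Built passively from observed behavior, with no direct questions asked."""
    visit_log: Counter = field(default_factory=Counter)

    def record_visit(self, venue_category: str) -> None:
        # Each logged check-in or location ping nudges the inferred preference.
        self.visit_log[venue_category] += 1

    def inferred_favorite(self) -> str | None:
        # The most frequently visited category stands in for a stated preference.
        return self.visit_log.most_common(1)[0][0] if self.visit_log else None


# Explicit: the user answers a question.
explicit = ExplicitProfile(favorite_cuisine="thai")

# Implicit: the same signal emerges from passively gathered events.
implicit = ImplicitProfile()
for venue in ["thai", "coffee", "thai", "thai", "cinema"]:
    implicit.record_visit(venue)

print(explicit.favorite_cuisine)      # "thai" (the user told us)
print(implicit.inferred_favorite())   # "thai" (inferred from behavior)
```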
Geo-location services are key to implicit personalization, as they supply the raw data for analyzing daily habits. By recognizing preferred spots such as restaurants and entertainment venues, digital PAs can combine statistical averages (the general preferences of the user's demographic) with the user's individual behavior to tailor responses as accurately as possible to requests like, say, seeking out a new café while on vacation.
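As a rough sketch of how that blending might work, here is a hypothetical weighted score between a demographic prior and the user's own visit history; the weights, numbers, and category names are all invented for illustration, not how any particular assistant actually does it.

```python
# A hypothetical scoring sketch: blend the demographic average ("people like you
# tend to like X") with the individual's own observed behavior, weighted so that
# personal history dominates once enough of it has been collected.

def blended_score(demographic_prior: float, personal_rate: float,
                  n_personal_events: int, smoothing: float = 10.0) -> float:
    """Shrink toward the demographic prior when there is little personal data."""
    weight = n_personal_events / (n_personal_events + smoothing)
    return weight * personal_rate + (1 - weight) * demographic_prior


# Demographic prior: 40% of the user's demographic favors quiet espresso bars.
# Personal history: 9 of the user's last 12 café visits were espresso bars.
score = blended_score(demographic_prior=0.40, personal_rate=9 / 12, n_personal_events=12)
print(f"espresso-bar score while on vacation: {score:.2f}")  # ~0.59, above the prior
```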
One Minority Report-esque impact of this tech? Highly personalized music selection. By assessing keywords in text messages and social media posts, a digital PA can create a playlist pegged to a user's current mood. Throw in location data, and the assistant can do things like shuffle up a pump-up mix when you're at the gym.
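As a toy illustration of the keyword-plus-location idea, here is a hypothetical Python sketch; the mood lexicon, location strings, and playlist names are all made up, and a real assistant would use trained language models rather than a hand-written word list.

```python
# Toy mood-plus-location playlist selection: count mood keywords in recent
# messages, then pair the winning mood with the user's current location.

MOOD_KEYWORDS = {
    "pumped": {"gym", "crushing", "let's go", "workout"},
    "mellow": {"tired", "rainy", "cozy", "long day"},
}

PLAYLISTS = {
    ("pumped", "gym"): "High-Intensity Mix",
    ("mellow", "home"): "Low-Key Evening",
}


def infer_mood(recent_messages: list[str]) -> str:
    """Pick the mood whose keywords appear most often in recent texts/posts."""
    counts = {mood: 0 for mood in MOOD_KEYWORDS}
    for message in recent_messages:
        text = message.lower()
        for mood, words in MOOD_KEYWORDS.items():
            counts[mood] += sum(word in text for word in words)
    return max(counts, key=counts.get)


def pick_playlist(recent_messages: list[str], current_location: str) -> str:
    mood = infer_mood(recent_messages)
    return PLAYLISTS.get((mood, current_location), "Daily Default")


print(pick_playlist(["Crushing this workout, let's go!"], current_location="gym"))
# -> "High-Intensity Mix"
```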
Patterned activities are also noted. If you always order greasy Chinese food before heading to the movies, for example, the tech can make ticket and menu options more readily accessible.
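One plausible way to spot such a pattern is simple pair counting over an activity log; the sketch below is a hypothetical illustration with an invented log and threshold, not a description of any shipping system.

```python
# A hypothetical sketch of patterned-activity detection: count how often one
# activity (ordering takeout) is followed by another (buying movie tickets),
# and surface a shortcut once the pattern is frequent enough.

from collections import Counter

# Toy event log: (day, ordered list of activities), gathered implicitly.
event_log = [
    ("fri-1", ["chinese_takeout", "movie_tickets"]),
    ("sat-1", ["gym", "coffee"]),
    ("fri-2", ["chinese_takeout", "movie_tickets"]),
    ("fri-3", ["chinese_takeout", "movie_tickets"]),
]

pair_counts = Counter()
for _, activities in event_log:
    for first, second in zip(activities, activities[1:]):
        pair_counts[(first, second)] += 1

# If a pair shows up often enough, pre-load the relevant shortcuts.
for (first, second), count in pair_counts.items():
    if count >= 3:
        print(f"User often follows '{first}' with '{second}' ({count} times): "
              f"surface menu and ticket shortcuts together.")
```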
At the end of the day, though, are all of these perks worth the deep mining of our personal data?
Has Tech Gone Too Far?
Some argue that, in amassing such a depth of knowledge about individual users, this technique crosses a critical line into privacy invasion. Defenders counter that, theoretically, this data would be mined anyway, just after users provided permission through system settings. Plus, implicit personalization has the benefit of being far more accurate, as it draws on multiple data points to build a more comprehensive picture of a user's overall behaviors, preferences, and social networks.
The Catch-22 is that competing operating systems and apps rely on personal data to improve their services, and users have come to expect such personalization. Tech companies, responding to conflicting messages from the public (Protect my privacy! Figure out what I want before I want it!), must balance on a high wire to keep customers happy. And because losing consumer trust can be lethal, the debate on privacy certainly hasn't gone unnoticed.
Facebook made recent headlines with its secret mood testing of nearly 700,000 users, and while the revelation evoked a healthy debate, the site has yet to see a massive exodus—which seems to suggest that if people love a certain tech service, their outrage over privacy invasion will only go so far.
Like it or not, artificial intelligence technology is becoming more common by the day, and while we should certainly stay alert to excessive intrusions on our privacy, it may be time to reassess what privacy even means in the Digital Age.